From: "Jan Beulich" <JBeulich@novell.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
yu.zhao@intel.com, Ian Campbell <Ian.Campbell@eu.citrix.com>,
Ian Pratt <Ian.Pratt@eu.citrix.com>,
Keir Fraser <keir.fraser@eu.citrix.com>
Subject: Re: Re: [DOM0 KERNELS] pciback: Fix SR-IOV VF passthrough
Date: Tue, 02 Mar 2010 09:33:05 +0000
Message-ID: <4B8CE9610200007800032180@vpn.id2.novell.com>
In-Reply-To: <20100301162026.GD7881@phenom.dumpdata.com>
[-- Attachment #1: Type: text/plain, Size: 456 bytes --]
>>> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> 01.03.10 17:20 >>>
>On Mon, Mar 01, 2010 at 09:06:39AM +0000, Jan Beulich wrote:
>> Some parts of this we had been given by Intel, but some were also
>> implemented differently there. I'm reproducing the patch below, and
>
>Could you attach it as an attachment? I get:
>
>patching file drivers/xen/pciback/conf_space_header.c
>patch: **** malformed patch at line 139: *data)
Here you go.
Jan
[-- Attachment #2: xen-pciback-sriov --]
[-- Type: application/octet-stream, Size: 3505 bytes --]
From: Zhao, Yu <yu.zhao@intel.com>
Subject: guest SR-IOV support for PV guest
Patch-mainline: n/a
These changes allow a PV guest to use a Virtual Function (VF). A VF's vendor
and device ID registers in config space read as 0xffff, which is invalid and
would cause the device to be ignored by the PCI device scan. The SR-IOV code
fixes up the corresponding values in 'struct pci_dev', so returning those
values presents the correct VID and DID to the PV guest kernel.

The command register in config space is also read-only zero, which means we
have to emulate the MMIO enable bit (a VF uses only MMIO resources) so the PV
kernel can work properly.
Acked-by: Jan Beulich <jbeulich@novell.com>
--- head-2009-07-28.orig/drivers/xen/pciback/conf_space_header.c 2009-07-28 12:01:32.000000000 +0200
+++ head-2009-07-28/drivers/xen/pciback/conf_space_header.c 2009-07-29 11:03:07.000000000 +0200
@@ -18,6 +18,25 @@ struct pci_bar_info {
 #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO))
 #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER)
 
+static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data)
+{
+	int i;
+	int ret;
+
+	ret = pciback_read_config_word(dev, offset, value, data);
+	if (!atomic_read(&dev->enable_cnt))
+		return ret;
+
+	for (i = 0; i < PCI_ROM_RESOURCE; i++) {
+		if (dev->resource[i].flags & IORESOURCE_IO)
+			*value |= PCI_COMMAND_IO;
+		if (dev->resource[i].flags & IORESOURCE_MEM)
+			*value |= PCI_COMMAND_MEMORY;
+	}
+
+	return ret;
+}
+
 static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 {
 	int err;
@@ -141,10 +160,26 @@ static inline void read_dev_bar(struct p
 				struct pci_bar_info *bar_info, int offset,
 				u32 len_mask)
 {
-	pci_read_config_dword(dev, offset, &bar_info->val);
-	pci_write_config_dword(dev, offset, len_mask);
-	pci_read_config_dword(dev, offset, &bar_info->len_val);
-	pci_write_config_dword(dev, offset, bar_info->val);
+	int pos;
+	struct resource *res = dev->resource;
+
+	if (offset == PCI_ROM_ADDRESS || offset == PCI_ROM_ADDRESS1)
+		pos = PCI_ROM_RESOURCE;
+	else {
+		pos = (offset - PCI_BASE_ADDRESS_0) / 4;
+		if (pos && ((res[pos - 1].flags & (PCI_BASE_ADDRESS_SPACE |
+				PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
+			    (PCI_BASE_ADDRESS_SPACE_MEMORY |
+			     PCI_BASE_ADDRESS_MEM_TYPE_64))) {
+			bar_info->val = res[pos - 1].start >> 32;
+			bar_info->len_val = res[pos - 1].end >> 32;
+			return;
+		}
+	}
+
+	bar_info->val = res[pos].start |
+			(res[pos].flags & PCI_REGION_FLAG_MASK);
+	bar_info->len_val = res[pos].end - res[pos].start + 1;
 }
 
 static void *bar_init(struct pci_dev *dev, int offset)
@@ -185,6 +220,22 @@ static void bar_release(struct pci_dev *
 	kfree(data);
 }
 
+static int pciback_read_vendor(struct pci_dev *dev, int offset,
+			       u16 *value, void *data)
+{
+	*value = dev->vendor;
+
+	return 0;
+}
+
+static int pciback_read_device(struct pci_dev *dev, int offset,
+			       u16 *value, void *data)
+{
+	*value = dev->device;
+
+	return 0;
+}
+
 static int interrupt_read(struct pci_dev *dev, int offset, u8 * value,
 			  void *data)
 {
@@ -212,9 +263,19 @@ static int bist_write(struct pci_dev *de
 
 static const struct config_field header_common[] = {
 	{
+	 .offset    = PCI_VENDOR_ID,
+	 .size      = 2,
+	 .u.w.read  = pciback_read_vendor,
+	},
+	{
+	 .offset    = PCI_DEVICE_ID,
+	 .size      = 2,
+	 .u.w.read  = pciback_read_device,
+	},
+	{
 	 .offset    = PCI_COMMAND,
 	 .size      = 2,
-	 .u.w.read  = pciback_read_config_word,
+	 .u.w.read  = command_read,
 	 .u.w.write = command_write,
 	},
 	{
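
For reference, a minimal user-space sketch (not part of the patch; it assumes a
Linux host with sysfs and uses a placeholder VF address). It contrasts the
fixed-up vendor ID held in struct pci_dev, which is what pciback_read_vendor()
and pciback_read_device() above hand to the frontend, with the raw VF config
space, whose Vendor ID and Device ID registers read back as 0xffff:

/* Sketch only: compare struct pci_dev's fixed-up vendor ID (the sysfs
 * "vendor" attribute) with the raw config-space Vendor/Device ID of an
 * SR-IOV VF.  The BDF below is a placeholder; substitute a real VF address. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define VF_SYSFS "/sys/bus/pci/devices/0000:01:10.0"	/* placeholder BDF */

int main(void)
{
	char buf[32];
	uint16_t raw[2];	/* raw[0] = Vendor ID, raw[1] = Device ID */
	FILE *f;
	int fd;

	/* Fixed-up value from struct pci_dev, as exposed by the kernel. */
	f = fopen(VF_SYSFS "/vendor", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("struct pci_dev vendor: %s", buf);
		fclose(f);
	}

	/* Raw config space: an SR-IOV VF returns 0xffff for both IDs here. */
	fd = open(VF_SYSFS "/config", O_RDONLY);
	if (fd >= 0) {
		if (pread(fd, raw, sizeof(raw), 0) == (ssize_t)sizeof(raw))
			printf("raw config space:      vendor %#06x device %#06x\n",
			       raw[0], raw[1]);
		close(fd);
	}
	return 0;
}

Build with e.g. gcc -o vf-ids vf-ids.c and point VF_SYSFS at an actual VF.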
Thread overview: 9+ messages
2010-02-26 17:25 [DOM0 KERNELS] pciback: Fix SR-IOV VF passthrough Keir Fraser
2010-02-26 20:51 ` Konrad Rzeszutek Wilk
2010-03-01 9:06 ` Jan Beulich
2010-03-01 9:45 ` Keir Fraser
2010-03-01 16:20 ` Konrad Rzeszutek Wilk
2010-03-01 16:49 ` Keir Fraser
2010-03-01 19:12 ` Konrad Rzeszutek Wilk
2010-03-01 22:21 ` Keir Fraser
2010-03-02 9:33 ` Jan Beulich [this message]