From: veerasena reddy <veeruyours@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: Accessing PCI BAR space directly from XEN hypervisor
Date: Tue, 14 Jun 2011 08:55:15 +0530 [thread overview]
Message-ID: <BANLkTimbjG40q6+UX06_tJG_kQxy=aykuQ@mail.gmail.com> (raw)
In-Reply-To: <BANLkTim_htK=MnD+iShPPNKO4iOkY6gf3A@mail.gmail.com>
Hi,
Can someone please throw some light on this?
Thanks in advance.
Regards,
VSR.
On Mon, Jun 13, 2011 at 2:24 PM, veerasena reddy <veeruyours@gmail.com> wrote:
> Hi,
>
> I am doing an experiment to see whether I can get more throughput by moving
> the Tx ring update and doorbell-ring mechanism into the hypervisor and
> providing a hypercall to the HVM guest. When a Tx packet is ready, the HVM
> guest invokes the hypercall with the packet address as an argument. The
> hypercall handler writes the packet address into the Tx ring and rings the
> doorbell directly, without involving dom0.
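>
> Roughly, the handler I have in mind looks like the sketch below. This is
> purely illustrative: the handler name, ring layout and doorbell pointer are
> placeholders I made up, not existing Xen interfaces, and I am assuming the
> ring and the NIC BAR are already mapped somewhere.
>
>   /* Illustrative sketch only; none of these names are real Xen interfaces. */
>   struct tx_ring {
>       uint64_t desc[256];      /* descriptor slots holding packet addresses */
>       unsigned int prod;       /* producer index advanced by the handler */
>   };
>
>   static struct tx_ring *txr;          /* assumed: ring already set up and mapped */
>   static volatile uint32_t *doorbell;  /* assumed: points into the mapped NIC BAR */
>
>   /* Hypothetical handler: queue one packet address, then kick the NIC. */
>   static long do_tx_kick(uint64_t packet_maddr)
>   {
>       txr->desc[txr->prod % 256] = packet_maddr;
>       wmb();                   /* make the descriptor visible before the kick */
>       txr->prod++;
>       *doorbell = txr->prod;   /* MMIO write to the doorbell register */
>       return 0;
>   }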
>
> As part of this, I need to access the I/O memory BAR region of the PCI
> network device from the Xen hypervisor (in the hypercall handler). I tried
> converting the BAR physical address of the PCI device (captured from lspci
> as shown below) to an MFN, but got all F's (an invalid entry). Even
> pfn_valid(0xf0904000) failed on this address. By contrast, I could
> successfully map a dom0 page (allocated with vmalloc()) and access it in
> the hypervisor using map_domain_page(mfn).
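>
> For reference, this is roughly what I tried in the hypercall handler
> (simplified; the exact calls I used may differ slightly from this):
>
>   /* 0xf0904000 is the BAR physical address reported by lspci below. */
>   paddr_t bar_maddr = 0xf0904000;
>   unsigned long mfn = bar_maddr >> PAGE_SHIFT;
>   void *va;
>
>   if ( !mfn_valid(mfn) )        /* the validity check I used; it fails here */
>       return -EINVAL;
>
>   va = map_domain_page(mfn);    /* works fine for the MFN of the
>                                  * vmalloc()ed dom0 page, but I never
>                                  * get this far for the BAR address */
>
> My guess is that the validity check fails because the BAR frame is MMIO
> rather than RAM, so it has no frame-table entry, but I am not sure whether
> that is the real reason or how I am supposed to map such a frame instead.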
>
> Could anyone please suggest how to access the BAR regions of a PCI device
> from the hypervisor? How should the mapping and the read/write operations
> be done?
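>
> In a dom0 Linux driver I know how I would do this; for comparison, the
> usual pattern there looks like the sketch below (the register offsets are
> just placeholders, and pci_enable_device()/pci_request_regions() and error
> handling are omitted). What I am after is the equivalent of this inside
> the hypervisor.
>
>   #include <linux/pci.h>
>   #include <linux/io.h>
>
>   static void __iomem *regs;
>
>   static int map_nic_bar(struct pci_dev *pdev)
>   {
>       /* BAR 2 is the 4K MMIO region shown in the lspci output below. */
>       regs = ioremap(pci_resource_start(pdev, 2), pci_resource_len(pdev, 2));
>       if (!regs)
>           return -ENOMEM;
>
>       writel(1, regs + 0x38);    /* placeholder register write */
>       (void)readl(regs + 0x3c);  /* placeholder register read */
>       return 0;
>   }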
>
> Also, could you please explain the difference between mapping dom0's
> locally allocated pages and mapping dom0's PCI BAR memory in the
> hypervisor?
>
> Thanks in advance.
>
> Regards,
> VSR.
> ==========================
> [root@fc13 xen]# lspci -vvs 09:00.0
> 09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B
> PCI Express Gigabit Ethernet controller (rev 03)
> Subsystem: Lenovo Device 2131
> Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR+ FastB2B- DisINTx+
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort-
> <MAbort- >SERR- <PERR- INTx-
> Latency: 0, Cache Line Size: 64 bytes
> Interrupt: pin A routed to IRQ 1247
> Region 0: I/O ports at 4000 [size=256]
> Region 2: Memory at f0904000 (64-bit, prefetchable) [size=4K]
> Region 4: Memory at f0900000 (64-bit, prefetchable) [size=16K]
> [virtual] Expansion ROM at f0920000 [disabled] [size=128K]
> ===========================
>
Thread overview: 3 messages
2011-06-13 8:54 Accessing PCI BAR space directly from XEN hypervisor veerasena reddy
2011-06-14 3:25 ` veerasena reddy [this message]
2011-06-14 6:38 ` Keir Fraser