* Re: [RFC] Proposal of QEMU PCI Endpoint test environment
  [not found] <CANXvt5oKt=AKdqv24LT079e+6URnfqJcfTJh0ajGA17paJUEKw@mail.gmail.com>
@ 2023-08-23  6:09 ` Manivannan Sadhasivam
  2023-08-25  8:56   ` Shunsuke Mie
  2023-09-21  9:11 ` Kishon Vijay Abraham I
  1 sibling, 1 reply; 12+ messages in thread
From: Manivannan Sadhasivam @ 2023-08-23  6:09 UTC (permalink / raw)
  To: Shunsuke Mie
  Cc: Lorenzo Pieralisi, Michael S. Tsirkin, Paolo Bonzini,
	Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas,
	Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński,
	Kishon Vijay Abraham I

On Fri, Aug 18, 2023 at 10:46:02PM +0900, Shunsuke Mie wrote:
> Hi all,
>
> We are proposing to add a new test system to Linux for PCIe Endpoint that
> can be run on QEMU without real hardware. At present, we have partially
> confirmed that pci-epf-test is working, but it is not yet complete.
> However, we would appreciate your comments on the architecture design.
>
> # Background
> The background is as follows.
>
> A PCI Endpoint function driver is implemented using the PCIe Endpoint
> framework, but it requires physical boards for testing, and it is difficult
> to test sufficiently. In order to find bugs and hardware-dependent
> implementations early, continuous testing is required. Since it is
> difficult to automate tests that require hardware, this RFC proposes a
> virtual environment for testing PCI endpoint function drivers.
>

This sounds exciting to me and yes, it is going to be really helpful for
validating the EP framework as a whole.

> # Architecture
> The overview of the architecture is as follows.
>
>  Guest 1                        Guest 2
> +-------------------------+    +----------------------------+
> | Linux kernel            |    | Linux kernel               |
> |                         |    |                            |
> | PCI EP function driver  |    |                            |
> | (e.g. pci-epf-test)     |    |                            |
> |-------------------------|    | PCI Device Driver          |
> | (2) QEMU EPC Driver     |    | (e.g. pci_endpoint_test)   |
> +-------------------------+    +----------------------------+
> +-------------------------+    +----------------------------+
> | QEMU                    |    | QEMU                       |
> |-------------------------|    |----------------------------|
> | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device |
> +-------------------------+    +----------------------------+
>
> At present, it is designed to work only with guests on the same host, and
> communication is done through Unix domain sockets.
>
> The three parts shown in the figure were introduced this time.
>
> (1) QEMU PCI Endpoint Controller (EPC) Device
> A PCI Endpoint Controller implemented as a QEMU PCI device.
> (2) QEMU PCI Endpoint Controller (EPC) Driver
> A Linux kernel driver that drives the device (1). It registers an EPC
> device with the Linux kernel and handles each operation for the EPC device.
> (3) QEMU PCI Endpoint Function (EPF) Bridge Device
> A QEMU PCI device that cooperates with (1), performs the accesses to PCI
> configuration space, BARs and memory space that let the guests communicate,
> and generates interrupts to guest 1.
>

I'm not very familiar with Qemu, but why can't the existing Qemu PCIe host
controller devices be used for EP communication? I mean, what is the need
for a dedicated EPF bridge device (3) in the host (Guest 2 as per your
diagram)?

Is that because you use socket communication between EP and host?

- Mani

> Each project is here:
> (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1
> files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c}
> (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc
> files: drivers/pci/controller/pcie-qemu-ep.c
>
> # Protocol
>
> PCI and PCIe have a layered structure comprising Physical, Data Link and
> Transaction layers. The communication between the bridge (3) and the
> controller (1) mimics the Transaction layer. Specifically, a protocol is
> implemented for exchanging fds and for a communication protocol version
> check, in addition to interactions equivalent to PCIe Transaction Layer
> Packets (reads and writes of I/O, Memory and Configuration space, and
> Messages). In my mind, we need to discuss the communication more.
>
> We also are planning to post the patch set after the code is organized and
> the protocol discussion has matured.
>
> Best regards,
> Shunsuke

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 12+ messages in thread
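[Editorial aside: the RFC describes TLP-equivalent reads and writes carried over a
Unix domain socket, but does not give the wire format. The sketch below is only an
illustration of what such framing could look like; the header layout, the message
type and space constants, and the function names are all invented here and are not
taken from the qemu-epc code.]

```python
import socket
import struct

# Hypothetical wire format for a TLP-like message: a fixed header carrying
# the operation, the target address space, a 64-bit address, and the payload
# length, followed by the payload bytes. Purely illustrative.
HDR = struct.Struct("<BBQI")  # type, space, address, size

TYPE_MEM_WRITE, TYPE_MEM_READ = 0, 1
SPACE_CONFIG, SPACE_MEMORY, SPACE_IO = 0, 1, 2

def send_tlp(sock, msg_type, space, addr, payload=b""):
    """Frame and send one TLP-like message over a stream socket."""
    sock.sendall(HDR.pack(msg_type, space, addr, len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes (stream sockets may return short reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_tlp(sock):
    """Receive one framed message; returns (type, space, addr, payload)."""
    msg_type, space, addr, size = HDR.unpack(recv_exact(sock, HDR.size))
    return msg_type, space, addr, recv_exact(sock, size)

if __name__ == "__main__":
    # Stand in for the bridge and controller ends with a socketpair.
    bridge, epc = socket.socketpair()
    send_tlp(bridge, TYPE_MEM_WRITE, SPACE_MEMORY, 0x1000, b"\xde\xad\xbe\xef")
    msg_type, space, addr, payload = recv_tlp(epc)
    print(msg_type, space, hex(addr), payload.hex())
```

A real protocol would also need version negotiation and a distinction between
posted writes and non-posted reads (which require completions), which is
presumably part of what the author wants to discuss.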
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-08-23 6:09 ` [RFC] Proposal of QEMU PCI Endpoint test environment Manivannan Sadhasivam @ 2023-08-25 8:56 ` Shunsuke Mie 0 siblings, 0 replies; 12+ messages in thread From: Shunsuke Mie @ 2023-08-25 8:56 UTC (permalink / raw) To: Manivannan Sadhasivam Cc: Lorenzo Pieralisi, Michael S. Tsirkin, Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas, Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński, Kishon Vijay Abraham I On 2023/08/23 15:09, Manivannan Sadhasivam wrote: > On Fri, Aug 18, 2023 at 10:46:02PM +0900, Shunsuke Mie wrote: >> Hi all, >> >> We are proposing to add a new test syste to Linux for PCIe Endpoint. That >> can be run on QEMU without real hardware. At present, partially we have >> confirmed that pci-epf-test is working, but it is not yet complete. >> However, we would appreciate your comments on the architecture design. >> >> # Background >> The background is as follows. >> >> PCI Endpoint function driver is implemented using the PCIe Endpoint >> framework, but it requires physical boards for testing, and it is difficult >> to test sufficiently. In order to find bugs and hardware-dependent >> implementations early, continuous testing is required. Since it is >> difficult to automate tests that require hardware, this RFC proposes a >> virtual environment for testing PCI endpoint function drivers. >> > This sounds exciting to me and yes, it is going to be really helpful for > validating EP framework as a whole. > >> # Architecture >> The overview of the architecture is as follows. >> >> Guest 1 Guest 2 >> +-------------------------+ +----------------------------+ >> | Linux kernel | | Linux kernel | >> | | | | >> | PCI EP function driver | | | >> | (e.g. pci-epf-test) | | | >> |-------------------------| | PCI Device Driver | >> | (2) QEMU EPC Driver | | (e.g. 
pci_endpoint_test) | >> +-------------------------+ +----------------------------+ >> +-------------------------+ +----------------------------+ >> | QEMU | | QEMU | >> |-------------------------| |----------------------------| >> | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | >> +-------------------------+ +----------------------------+ >> >> At present, it is designed to work guests only on the same host, and >> communication is done through Unix domain sockets. >> >> The three parts shown in the figure were introduced this time. >> >> (1) QEMU PCI Endpoint Controller(EPC) Device >> PCI Endpoint Controller implemented as QEMU PCI device. >> (2) QEMU PCI Endpoint Controller(EPC) Driver >> Linux kernel driver that drives the device (1). It registers a epc device >> to linux kernel and handling each operations for the epc device. >> (3) QEMU PCI Endpoint function(EPF) Bridge Device >> QEMU PCI device that cooperates with (1) and performs accesses to pci >> configuration space, BAR and memory space to communicate each guests, and >> generates interruptions to the guest 1. >> > I'm not very familiar with Qemu, but why can't the existing Qemu PCIe host > controller devices used for EP communication? I mean, what is the need for a > dedicated EPF bridge device (3) in host? (Guest 2 as per your diagram). > > Is that because you use socket communication between EP and host? At least, the part that communicates with (1) is necessary, but I don't know if the current implementation is appropriate. In addition, there is a performance issue, so I am currently investigating QEMU more. e.g. pci emulation, shared-memory, etc. I'd like to improve and submit a next rfc. 
Thanks,
Shunsuke Mie

> - Mani
>
>> Each project is here:
>> (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1
>> files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c}
>> (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc
>> files: drivers/pci/controller/pcie-qemu-ep.c
>>
>> # Protocol
>>
>> PCI and PCIe have a layered structure comprising Physical, Data Link and
>> Transaction layers. The communication between the bridge (3) and the
>> controller (1) mimics the Transaction layer. Specifically, a protocol is
>> implemented for exchanging fds and for a communication protocol version
>> check, in addition to interactions equivalent to PCIe Transaction Layer
>> Packets (reads and writes of I/O, Memory and Configuration space, and
>> Messages). In my mind, we need to discuss the communication more.
>>
>> We also are planning to post the patch set after the code is organized and
>> the protocol discussion has matured.
>>
>> Best regards,
>> Shunsuke

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment
  [not found] <CANXvt5oKt=AKdqv24LT079e+6URnfqJcfTJh0ajGA17paJUEKw@mail.gmail.com>
  2023-08-23  6:09 ` [RFC] Proposal of QEMU PCI Endpoint test environment Manivannan Sadhasivam
@ 2023-09-21  9:11 ` Kishon Vijay Abraham I
  2023-09-26  7:26   ` Christoph Hellwig
  2023-09-26  9:47   ` Shunsuke Mie
  1 sibling, 2 replies; 12+ messages in thread
From: Kishon Vijay Abraham I @ 2023-09-21  9:11 UTC (permalink / raw)
  To: Shunsuke Mie, Lorenzo Pieralisi, Michael S. Tsirkin, vaishnav.a
  Cc: Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring,
	Bjorn Helgaas, Linux Kernel Mailing List, linux-pci,
	Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I

+Vaishnav

Hi Shunsuke,

On 8/18/2023 7:16 PM, Shunsuke Mie wrote:
> Hi all,
>
> We are proposing to add a new test system to Linux for PCIe Endpoint that
> can be run on QEMU without real hardware. At present, we have partially
> confirmed that pci-epf-test is working, but it is not yet complete.
> However, we would appreciate your comments on the architecture design.
>
> # Background
> The background is as follows.
>
> A PCI Endpoint function driver is implemented using the PCIe Endpoint
> framework, but it requires physical boards for testing, and it is difficult
> to test sufficiently. In order to find bugs and hardware-dependent
> implementations early, continuous testing is required. Since it is
> difficult to automate tests that require hardware, this RFC proposes a
> virtual environment for testing PCI endpoint function drivers.

This would be quite useful, and thank you for attempting it! I would like
to compare the other mechanisms available in addition to QEMU before going
with the QEMU approach.

Though I don't understand this fully, looking at
https://osseu2023.sched.com/event/1OGk8/emulating-devices-in-linux-using-greybus-subsystem-vaishnav-mohandas-achath-texas-instruments,
Vaishnav seems to solve the same problem using greybus for multiple types
of devices.
Vaishnav, we'd wait for your OSS presentation but do you have any initial thoughts on how greybus could be used to test PCIe endpoint drivers? Thanks, Kishon > > # Architecture > The overview of the architecture is as follows. > > Guest 1 Guest 2 > +-------------------------+ +----------------------------+ > | Linux kernel | | Linux kernel | > | | | | > | PCI EP function driver | | | > | (e.g. pci-epf-test) | | | > |-------------------------| | PCI Device Driver | > | (2) QEMU EPC Driver | | (e.g. pci_endpoint_test) | > +-------------------------+ +----------------------------+ > +-------------------------+ +----------------------------+ > | QEMU | | QEMU | > |-------------------------| |----------------------------| > | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | > +-------------------------+ +----------------------------+ > > At present, it is designed to work guests only on the same host, and > communication is done through Unix domain sockets. > > The three parts shown in the figure were introduced this time. > > (1) QEMU PCI Endpoint Controller(EPC) Device > PCI Endpoint Controller implemented as QEMU PCI device. > (2) QEMU PCI Endpoint Controller(EPC) Driver > Linux kernel driver that drives the device (1). It registers a epc device > to linux kernel and handling each operations for the epc device. > (3) QEMU PCI Endpoint function(EPF) Bridge Device > QEMU PCI device that cooperates with (1) and performs accesses to pci > configuration space, BAR and memory space to communicate each guests, and > generates interruptions to the guest 1. 
> > Each projects are: > (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1 > <https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1> > files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c} > (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc > <https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc> > files: drivers/pci/controller/pcie-qemu-ep.c > > # Protocol > > PCI, PCIe has a layer structure that includes Physical, Data Lane and > Transaction. The communicates between the bridge(3) and controller (1) > mimic the Transaction. Specifically, a protocol is implemented for > exchanging fd for communication protocol version check and communication, > in addition to the interaction equivalent to PCIe Transaction Layer Packet > (Read and Write of I/O, Memory, Configuration space and Message). In my > mind, we need to discuss the communication mor. > > We also are planning to post the patch set after the code is organized and > the protocol discussion is matured. > > Best regards, > Shunsuke ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment
  2023-09-21  9:11 ` Kishon Vijay Abraham I
@ 2023-09-26  7:26   ` Christoph Hellwig
  2023-09-26  9:47   ` Shunsuke Mie
  1 sibling, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2023-09-26  7:26 UTC (permalink / raw)
  To: Kishon Vijay Abraham I
  Cc: Shunsuke Mie, Lorenzo Pieralisi, Michael S. Tsirkin, vaishnav.a,
	Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring,
	Bjorn Helgaas, Linux Kernel Mailing List, linux-pci,
	Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I

On Thu, Sep 21, 2023 at 02:41:54PM +0530, Kishon Vijay Abraham I wrote:
> > PCI Endpoint function driver is implemented using the PCIe Endpoint
> > framework, but it requires physical boards for testing, and it is difficult
> > to test sufficiently. In order to find bugs and hardware-dependent
> > implementations early, continuous testing is required. Since it is
> > difficult to automate tests that require hardware, this RFC proposes a
> > virtual environment for testing PCI endpoint function drivers.
>
> This would be quite useful and thank you for attempting it! I would like to
> compare other mechanisms available in addition to QEMU before going with the
> QEMU approach.

Well, the point of the PCIe endpoint subsystem in vhost or similar is that
you can use one and the same endpoint implementation. So you can debug it
using qemu and then use it with a physical port, which would be really
amazing.

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-09-21 9:11 ` Kishon Vijay Abraham I 2023-09-26 7:26 ` Christoph Hellwig @ 2023-09-26 9:47 ` Shunsuke Mie 2023-09-26 12:40 ` Vaishnav Achath 1 sibling, 1 reply; 12+ messages in thread From: Shunsuke Mie @ 2023-09-26 9:47 UTC (permalink / raw) To: Kishon Vijay Abraham I, Lorenzo Pieralisi, Michael S. Tsirkin, vaishnav.a Cc: Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas, Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I On 2023/09/21 18:11, Kishon Vijay Abraham I wrote: > +Vaishnav > > Hi Shunsuke, > > On 8/18/2023 7:16 PM, Shunsuke Mie wrote: >> Hi all, >> >> We are proposing to add a new test syste to Linux for PCIe Endpoint. >> That >> can be run on QEMU without real hardware. At present, partially we have >> confirmed that pci-epf-test is working, but it is not yet complete. >> However, we would appreciate your comments on the architecture design. >> >> # Background >> The background is as follows. >> >> PCI Endpoint function driver is implemented using the PCIe Endpoint >> framework, but it requires physical boards for testing, and it is >> difficult >> to test sufficiently. In order to find bugs and hardware-dependent >> implementations early, continuous testing is required. Since it is >> difficult to automate tests that require hardware, this RFC proposes a >> virtual environment for testing PCI endpoint function drivers. > > This would be quite useful and thank you for attempting it! I would > like to compare other mechanisms available in-addition to QEMU before > going with the QEMU approach. I got it. I'll make a table to compare some methods that includes greybus to realize this emulation environment. 
Best, Shunsuke > Though I don't understand this fully, Looking at > https://osseu2023.sched.com/event/1OGk8/emulating-devices-in-linux-using-greybus-subsystem-vaishnav-mohandas-achath-texas-instruments, > Vaishnav seems to solve the same problem using greybus for multiple > type s of devices. > > Vaishnav, we'd wait for your OSS presentation but do you have any > initial thoughts on how greybus could be used to test PCIe endpoint > drivers? > > Thanks, > Kishon > >> >> # Architecture >> The overview of the architecture is as follows. >> >> Guest 1 Guest 2 >> +-------------------------+ +----------------------------+ >> | Linux kernel | | Linux kernel | >> | | | | >> | PCI EP function driver | | | >> | (e.g. pci-epf-test) | | | >> |-------------------------| | PCI Device Driver | >> | (2) QEMU EPC Driver | | (e.g. pci_endpoint_test) | >> +-------------------------+ +----------------------------+ >> +-------------------------+ +----------------------------+ >> | QEMU | | QEMU | >> |-------------------------| |----------------------------| >> | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | >> +-------------------------+ +----------------------------+ >> >> At present, it is designed to work guests only on the same host, and >> communication is done through Unix domain sockets. >> >> The three parts shown in the figure were introduced this time. >> >> (1) QEMU PCI Endpoint Controller(EPC) Device >> PCI Endpoint Controller implemented as QEMU PCI device. >> (2) QEMU PCI Endpoint Controller(EPC) Driver >> Linux kernel driver that drives the device (1). It registers a epc >> device >> to linux kernel and handling each operations for the epc device. >> (3) QEMU PCI Endpoint function(EPF) Bridge Device >> QEMU PCI device that cooperates with (1) and performs accesses to pci >> configuration space, BAR and memory space to communicate each guests, >> and >> generates interruptions to the guest 1. 
>> >> Each projects are: >> (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1 >> <https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1> >> files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c} >> (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc >> <https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc> >> files: drivers/pci/controller/pcie-qemu-ep.c >> >> # Protocol >> >> PCI, PCIe has a layer structure that includes Physical, Data Lane and >> Transaction. The communicates between the bridge(3) and controller (1) >> mimic the Transaction. Specifically, a protocol is implemented for >> exchanging fd for communication protocol version check and >> communication, >> in addition to the interaction equivalent to PCIe Transaction Layer >> Packet >> (Read and Write of I/O, Memory, Configuration space and Message). In my >> mind, we need to discuss the communication mor. >> >> We also are planning to post the patch set after the code is >> organized and >> the protocol discussion is matured. >> >> Best regards, >> Shunsuke ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-09-26 9:47 ` Shunsuke Mie @ 2023-09-26 12:40 ` Vaishnav Achath 2023-10-03 4:56 ` Shunsuke Mie 0 siblings, 1 reply; 12+ messages in thread From: Vaishnav Achath @ 2023-09-26 12:40 UTC (permalink / raw) To: Shunsuke Mie, Kishon Vijay Abraham I, Lorenzo Pieralisi, Michael S. Tsirkin Cc: Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas, Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I Hi Kishon, all, On 26/09/23 15:17, Shunsuke Mie wrote: > > On 2023/09/21 18:11, Kishon Vijay Abraham I wrote: >> +Vaishnav >> >> Hi Shunsuke, >> >> On 8/18/2023 7:16 PM, Shunsuke Mie wrote: >>> Hi all, >>> >>> We are proposing to add a new test syste to Linux for PCIe Endpoint. That >>> can be run on QEMU without real hardware. At present, partially we have >>> confirmed that pci-epf-test is working, but it is not yet complete. >>> However, we would appreciate your comments on the architecture design. >>> >>> # Background >>> The background is as follows. >>> >>> PCI Endpoint function driver is implemented using the PCIe Endpoint >>> framework, but it requires physical boards for testing, and it is difficult >>> to test sufficiently. In order to find bugs and hardware-dependent >>> implementations early, continuous testing is required. Since it is >>> difficult to automate tests that require hardware, this RFC proposes a >>> virtual environment for testing PCI endpoint function drivers. >> >> This would be quite useful and thank you for attempting it! I would like to >> compare other mechanisms available in-addition to QEMU before going with the >> QEMU approach. > > I got it. I'll make a table to compare some methods that includes greybus to > realize this emulation environment. 
> > > Best, > > Shunsuke > >> Though I don't understand this fully, Looking at >> https://osseu2023.sched.com/event/1OGk8/emulating-devices-in-linux-using-greybus-subsystem-vaishnav-mohandas-achath-texas-instruments, Vaishnav seems to solve the same problem using greybus for multiple type s of devices. >> >> Vaishnav, we'd wait for your OSS presentation but do you have any initial >> thoughts on how greybus could be used to test PCIe endpoint drivers? >> Apologies for the delay, I don't think greybus can be used for PCIe testing as there is no greybus equivalent for PCIe[1], it can only be used for relatively simpler devices today, I guess roadtest(UML based)[2] could be an alternative in this case. 1 - https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/greybus 2 - https://lore.kernel.org/lkml/YjN1ksNGujV611Ka@sirena.org.uk/ Thanks and Regards, Vaishnav >> Thanks, >> Kishon >> >>> >>> # Architecture >>> The overview of the architecture is as follows. >>> >>> Guest 1 Guest 2 >>> +-------------------------+ +----------------------------+ >>> | Linux kernel | | Linux kernel | >>> | | | | >>> | PCI EP function driver | | | >>> | (e.g. pci-epf-test) | | | >>> |-------------------------| | PCI Device Driver | >>> | (2) QEMU EPC Driver | | (e.g. pci_endpoint_test) | >>> +-------------------------+ +----------------------------+ >>> +-------------------------+ +----------------------------+ >>> | QEMU | | QEMU | >>> |-------------------------| |----------------------------| >>> | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | >>> +-------------------------+ +----------------------------+ >>> >>> At present, it is designed to work guests only on the same host, and >>> communication is done through Unix domain sockets. >>> >>> The three parts shown in the figure were introduced this time. >>> >>> (1) QEMU PCI Endpoint Controller(EPC) Device >>> PCI Endpoint Controller implemented as QEMU PCI device. 
>>> (2) QEMU PCI Endpoint Controller(EPC) Driver >>> Linux kernel driver that drives the device (1). It registers a epc device >>> to linux kernel and handling each operations for the epc device. >>> (3) QEMU PCI Endpoint function(EPF) Bridge Device >>> QEMU PCI device that cooperates with (1) and performs accesses to pci >>> configuration space, BAR and memory space to communicate each guests, and >>> generates interruptions to the guest 1. >>> >>> Each projects are: >>> (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1 >>> <https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1> >>> files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c} >>> (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc >>> <https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc> >>> files: drivers/pci/controller/pcie-qemu-ep.c >>> >>> # Protocol >>> >>> PCI, PCIe has a layer structure that includes Physical, Data Lane and >>> Transaction. The communicates between the bridge(3) and controller (1) >>> mimic the Transaction. Specifically, a protocol is implemented for >>> exchanging fd for communication protocol version check and communication, >>> in addition to the interaction equivalent to PCIe Transaction Layer Packet >>> (Read and Write of I/O, Memory, Configuration space and Message). In my >>> mind, we need to discuss the communication mor. >>> >>> We also are planning to post the patch set after the code is organized and >>> the protocol discussion is matured. >>> >>> Best regards, >>> Shunsuke ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-09-26 12:40 ` Vaishnav Achath @ 2023-10-03 4:56 ` Shunsuke Mie 2023-10-03 14:31 ` Jiri Kastner 0 siblings, 1 reply; 12+ messages in thread From: Shunsuke Mie @ 2023-10-03 4:56 UTC (permalink / raw) To: Vaishnav Achath, Kishon Vijay Abraham I, Lorenzo Pieralisi, Michael S. Tsirkin Cc: Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas, Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I Hi Vaishnav, On 2023/09/26 21:40, Vaishnav Achath wrote: > Hi Kishon, all, > > On 26/09/23 15:17, Shunsuke Mie wrote: >> On 2023/09/21 18:11, Kishon Vijay Abraham I wrote: >>> +Vaishnav >>> >>> Hi Shunsuke, >>> >>> On 8/18/2023 7:16 PM, Shunsuke Mie wrote: >>>> Hi all, >>>> >>>> We are proposing to add a new test syste to Linux for PCIe Endpoint. That >>>> can be run on QEMU without real hardware. At present, partially we have >>>> confirmed that pci-epf-test is working, but it is not yet complete. >>>> However, we would appreciate your comments on the architecture design. >>>> >>>> # Background >>>> The background is as follows. >>>> >>>> PCI Endpoint function driver is implemented using the PCIe Endpoint >>>> framework, but it requires physical boards for testing, and it is difficult >>>> to test sufficiently. In order to find bugs and hardware-dependent >>>> implementations early, continuous testing is required. Since it is >>>> difficult to automate tests that require hardware, this RFC proposes a >>>> virtual environment for testing PCI endpoint function drivers. >>> This would be quite useful and thank you for attempting it! I would like to >>> compare other mechanisms available in-addition to QEMU before going with the >>> QEMU approach. >> I got it. I'll make a table to compare some methods that includes greybus to >> realize this emulation environment. 
>>
>> Best,
>>
>> Shunsuke
>>
>>> Though I don't understand this fully, looking at
>>> https://osseu2023.sched.com/event/1OGk8/emulating-devices-in-linux-using-greybus-subsystem-vaishnav-mohandas-achath-texas-instruments,
>>> Vaishnav seems to solve the same problem using greybus for multiple types
>>> of devices.
>>>
>>> Vaishnav, we'd wait for your OSS presentation but do you have any initial
>>> thoughts on how greybus could be used to test PCIe endpoint drivers?
>>>
> Apologies for the delay, I don't think greybus can be used for PCIe testing
> as there is no greybus equivalent for PCIe[1]; it can only be used for
> relatively simpler devices today. I guess roadtest (UML based)[2] could be
> an alternative in this case.

Thank you for your comment.

To my understanding, roadtest uses UML, and it interacts with a hardware
model written in Python to do the testing. This would be great for automated
testing of drivers and subsystems.

For this PCIe endpoint, I think we need two hosts: one that works as a PCIe
endpoint and one that is a PCIe Root Complex for it. Is it possible to
realize such a system? Like:
UML + PCIe endpoint function driver <-> Python HW model (PCI Endpoint
controller) <-> UML + PCI driver for the function

As another option, I'm considering the feasibility of a dummy PCIe EPC
driver. It would work as a PCIe EPC device in the kernel and show a PCI
device, backed by the PCIe endpoint function driver, to the same host, so
it could realize a single-host setup for testing the function driver.

Best,

Shunsuke

> 1 -
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/greybus
> 2 - https://lore.kernel.org/lkml/YjN1ksNGujV611Ka@sirena.org.uk/
>
> Thanks and Regards,
> Vaishnav
>
>>> Thanks,
>>> Kishon
>>>
>>>> # Architecture
>>>> The overview of the architecture is as follows.
>>>> >>>> Guest 1 Guest 2 >>>> +-------------------------+ +----------------------------+ >>>> | Linux kernel | | Linux kernel | >>>> | | | | >>>> | PCI EP function driver | | | >>>> | (e.g. pci-epf-test) | | | >>>> |-------------------------| | PCI Device Driver | >>>> | (2) QEMU EPC Driver | | (e.g. pci_endpoint_test) | >>>> +-------------------------+ +----------------------------+ >>>> +-------------------------+ +----------------------------+ >>>> | QEMU | | QEMU | >>>> |-------------------------| |----------------------------| >>>> | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | >>>> +-------------------------+ +----------------------------+ >>>> >>>> At present, it is designed to work guests only on the same host, and >>>> communication is done through Unix domain sockets. >>>> >>>> The three parts shown in the figure were introduced this time. >>>> >>>> (1) QEMU PCI Endpoint Controller(EPC) Device >>>> PCI Endpoint Controller implemented as QEMU PCI device. >>>> (2) QEMU PCI Endpoint Controller(EPC) Driver >>>> Linux kernel driver that drives the device (1). It registers a epc device >>>> to linux kernel and handling each operations for the epc device. >>>> (3) QEMU PCI Endpoint function(EPF) Bridge Device >>>> QEMU PCI device that cooperates with (1) and performs accesses to pci >>>> configuration space, BAR and memory space to communicate each guests, and >>>> generates interruptions to the guest 1. >>>> >>>> Each projects are: >>>> (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1 >>>> <https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1> >>>> files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c} >>>> (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc >>>> <https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc> >>>> files: drivers/pci/controller/pcie-qemu-ep.c >>>> >>>> # Protocol >>>> >>>> PCI, PCIe has a layer structure that includes Physical, Data Lane and >>>> Transaction. 
The communicates between the bridge(3) and controller (1) >>>> mimic the Transaction. Specifically, a protocol is implemented for >>>> exchanging fd for communication protocol version check and communication, >>>> in addition to the interaction equivalent to PCIe Transaction Layer Packet >>>> (Read and Write of I/O, Memory, Configuration space and Message). In my >>>> mind, we need to discuss the communication mor. >>>> >>>> We also are planning to post the patch set after the code is organized and >>>> the protocol discussion is matured. >>>> >>>> Best regards, >>>> Shunsuke ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-10-03 4:56 ` Shunsuke Mie @ 2023-10-03 14:31 ` Jiri Kastner 0 siblings, 0 replies; 12+ messages in thread From: Jiri Kastner @ 2023-10-03 14:31 UTC (permalink / raw) To: Shunsuke Mie Cc: Vaishnav Achath, Kishon Vijay Abraham I, Lorenzo Pieralisi, Michael S. Tsirkin, Paolo Bonzini, Marcel Apfelbaum, qemu-devel, Rob Herring, Bjorn Helgaas, Linux Kernel Mailing List, linux-pci, Krzysztof Wilczyński, Manivannan Sadhasivam, Kishon Vijay Abraham I, Jagannathan Raman, Thanos Makatos, John Levon, William Henderson hi shunsuke, all, what about vfio-user + qemu? qemu already has libvfio-user as submodule. there is ongoing work to add qemu vfio-user client functionality. adding people involved to loop, not sure if i forgot somebody. regards jiri On Tue, Oct 03, 2023 at 01:56:03PM +0900, Shunsuke Mie wrote: > Hi Vaishnav, > > On 2023/09/26 21:40, Vaishnav Achath wrote: > > Hi Kishon, all, > > > > On 26/09/23 15:17, Shunsuke Mie wrote: > > > On 2023/09/21 18:11, Kishon Vijay Abraham I wrote: > > > > +Vaishnav > > > > > > > > Hi Shunsuke, > > > > > > > > On 8/18/2023 7:16 PM, Shunsuke Mie wrote: > > > > > Hi all, > > > > > > > > > > We are proposing to add a new test syste to Linux for PCIe Endpoint. That > > > > > can be run on QEMU without real hardware. At present, partially we have > > > > > confirmed that pci-epf-test is working, but it is not yet complete. > > > > > However, we would appreciate your comments on the architecture design. > > > > > > > > > > # Background > > > > > The background is as follows. > > > > > > > > > > PCI Endpoint function driver is implemented using the PCIe Endpoint > > > > > framework, but it requires physical boards for testing, and it is difficult > > > > > to test sufficiently. In order to find bugs and hardware-dependent > > > > > implementations early, continuous testing is required. 
Since it is > > > > > difficult to automate tests that require hardware, this RFC proposes a > > > > > virtual environment for testing PCI endpoint function drivers. > > > > This would be quite useful and thank you for attempting it! I would like to > > > > compare other mechanisms available in addition to QEMU before going with the > > > > QEMU approach. > > > I got it. I'll make a table to compare some methods, including greybus, for > > > realizing this emulation environment. > > > > > > Best, > > > Shunsuke > > > > Though I don't understand this fully, looking at > > > > https://osseu2023.sched.com/event/1OGk8/emulating-devices-in-linux-using-greybus-subsystem-vaishnav-mohandas-achath-texas-instruments, Vaishnav seems to solve the same problem using greybus for multiple types of devices. > > > > Vaishnav, we'd wait for your OSS presentation but do you have any initial > > > > thoughts on how greybus could be used to test PCIe endpoint drivers? > > > Apologies for the delay, I don't think greybus can be used for PCIe testing as > > there is no greybus equivalent for PCIe[1], it can only be used for relatively > > simpler devices today, I guess roadtest (UML-based)[2] could be an alternative in > > this case. > > Thank you for your comment. > > To my understanding, roadtest uses UML and interacts with a hardware > model written in Python to do testing. This would be great for automated > testing of drivers and subsystems. > > For this PCIe endpoint, I think we need two hosts, one that works as a PCIe > endpoint and one that is a PCIe Root Complex for it. Is it possible to > realize that system? > like: > UML + PCIe endpoint function driver <-> python HW model (PCI Endpoint > controller) <-> UML + pci driver for the function > > > As another option, I'm considering the feasibility of a dummy PCIe EPC driver.
> It works as a PCIe EPC device in the kernel and exposes a PCI device, backed by the > PCIe endpoint function driver, to the same host, so the function driver could be > tested on a single host. > > > Best, > > Shunsuke > > > 1 - > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/staging/greybus > > 2 - https://lore.kernel.org/lkml/YjN1ksNGujV611Ka@sirena.org.uk/ > > > > Thanks and Regards, > > Vaishnav > > > > > > Thanks, > > > > Kishon > > > > > > > > > # Architecture > > > > > The overview of the architecture is as follows. > > > > > > > > > > Guest 1 Guest 2 > > > > > +-------------------------+ +----------------------------+ > > > > > | Linux kernel | | Linux kernel | > > > > > | | | | > > > > > | PCI EP function driver | | | > > > > > | (e.g. pci-epf-test) | | | > > > > > |-------------------------| | PCI Device Driver | > > > > > | (2) QEMU EPC Driver | | (e.g. pci_endpoint_test) | > > > > > +-------------------------+ +----------------------------+ > > > > > +-------------------------+ +----------------------------+ > > > > > | QEMU | | QEMU | > > > > > |-------------------------| |----------------------------| > > > > > | (1) QEMU PCI EPC Device *----* (3) QEMU EPF Bridge Device | > > > > > +-------------------------+ +----------------------------+ > > > > > > > > > > At present, it is designed to work with guests only on the same host, and > > > > > communication is done through Unix domain sockets. > > > > > > > > > > The three parts shown in the figure were introduced this time. > > > > > > > > > > (1) QEMU PCI Endpoint Controller (EPC) Device > > > > > PCI Endpoint Controller implemented as a QEMU PCI device. > > > > > (2) QEMU PCI Endpoint Controller (EPC) Driver > > > > > Linux kernel driver that drives the device (1). It registers an EPC device > > > > > with the Linux kernel and handles the operations for the EPC device.
> > > > > (3) QEMU PCI Endpoint function (EPF) Bridge Device > > > > > QEMU PCI device that cooperates with (1) and performs accesses to PCI > > > > > configuration space, BAR and memory space so the guests can communicate, and > > > > > generates interrupts to guest 1. > > > > > > > > > > The projects are: > > > > > (1), (3) https://github.com/ShunsukeMie/qemu/tree/epf-bridge/v1 > > > > > files: hw/misc/{qemu-epc.{c,h}, epf-bridge.c} > > > > > (2) https://github.com/ShunsukeMie/linux-virtio-rdma/tree/qemu-epc > > > > > files: drivers/pci/controller/pcie-qemu-ep.c > > > > > > > > > > # Protocol > > > > > > > > > > PCI and PCIe have a layered structure that includes the Physical, Data Link and > > > > > Transaction layers. The communication between the bridge (3) and controller (1) > > > > > mimics the Transaction layer. Specifically, a protocol is implemented for > > > > > exchanging fds for a communication protocol version check and for communication, > > > > > in addition to the interaction equivalent to PCIe Transaction Layer Packets > > > > > (Read and Write of I/O, Memory, Configuration space and Message). In my > > > > > mind, we need to discuss the communication more. > > > > > > > > > > We are also planning to post the patch set after the code is organized and > > > > > the protocol discussion has matured. > > > > > > > > > > Best regards, > > > > > Shunsuke ^ permalink raw reply [flat|nested] 12+ messages in thread
[parent not found: <CAGNS4TbhS3XnCFAEi378+cSmJvGMdjN2oTv=tES36vbV4CaDuA@mail.gmail.com>]
[parent not found: <CANXvt5qKxfU3p1eSK4fkzRFRBXHSVvSkJrnQRLKPkQjhsMGNzQ@mail.gmail.com>]
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment [not found] ` <CANXvt5qKxfU3p1eSK4fkzRFRBXHSVvSkJrnQRLKPkQjhsMGNzQ@mail.gmail.com> @ 2023-10-05 7:02 ` Mattias Nissler 2023-10-06 11:51 ` Shunsuke Mie 0 siblings, 1 reply; 12+ messages in thread From: Mattias Nissler @ 2023-10-05 7:02 UTC (permalink / raw) To: Shunsuke Mie Cc: cz172638, bhelgaas, Jagannathan Raman, kishon, kvijayab, kw, levon, linux-kernel, linux-pci, lpieralisi, manivannan.sadhasivam, Marcel Apfelbaum, Michael S. Tsirkin, Paolo Bonzini, qemu-devel, robh, thanos.makatos, vaishnav.a, william.henderson On Thu, Oct 5, 2023 at 3:31 AM Shunsuke Mie <mie@igel.co.jp> wrote: > > Hi Jiri, Mattias and all. > > On Wed, Oct 4, 2023 at 16:36 Mattias Nissler <mnissler@rivosinc.com>: >>> >>> hi shunsuke, all, >>> what about vfio-user + qemu? > > Thank you for the suggestion. > >> FWIW, I have had some good success using VFIO-user to bridge software components to hardware designs. For the most part, I have been hooking up software endpoint models to hardware design components speaking the PCIe transaction layer protocol. The central piece you need is a way to translate between the VFIO-user protocol and PCIe transaction layer messages, basically converting ECAM accesses, memory accesses (DMA+MMIO), and interrupts between the two worlds. I have some code which implements the basics of that. It's certainly far from complete (TLP is a massive protocol), but it works well enough for me. I believe we should be able to open-source this if there's interest, let me know. > > It is what I want to do, but I'm not familiar with vfio and vfio-user, and I have a question. QEMU has a PCI TLP communication implementation for Multi-process QEMU[1]. It is similar to your success. I'm no qemu expert, but my understanding is that the plan is for the existing multi-process QEMU implementation to eventually be superseded/replaced by the VFIO-user based one (qemu folks, please correct me if I'm wrong).
From a functional perspective they are more or less equivalent AFAICT. > The multi-process qemu also communicates TLP over UDS. Could you let me know your opinion about it? Note that neither multi-process qemu nor VFIO-user actually pass around TLPs, but rather have their own command language to encode ECAM, MMIO, DMA, interrupts etc. However, translation from/to TLP is possible and works well enough in my experience. > >> One thing to note is that there are currently some limits to bridging VFIO-user / TLP that I haven't figured out and/or will need further work: Advanced PCIe concepts like PASID, ATS/PRI, SR-IOV etc. may lack equivalents on the VFIO-user side that would have to be filled in. The folk behind libvfio-user[2] have been very approachable and open to improvements in my experience though. >> >> If I understand correctly, the specific goal here is testing PCIe endpoint designs against a Linux host. What you'd need for that is a PCI host controller for the Linux side to talk to and then hooking up endpoints on the transaction layer. QEMU can simulate host controllers that work with existing Linux drivers just fine. Then you can put a vfio-user-pci stub device (I don't think this has landed in qemu yet, but you can find the code at [1]) on the simulated PCI bus which will expose any software interactions with the endpoint as VFIO-user protocol messages over a unix domain socket. The piece you need to bring is a VFIO-user server that handles these messages. Its task is basically translating between VFIO-user and TLP and then injecting TLP into your hardware design. > > Yes, if the PCI host controller you mentioned can be implemented, I can achieve my goal. I meant to say that the existing PCIe host controller implementations > in qemu can be used as is. > > To begin with, I'll investigate vfio and libvfio-user. Thanks! 
> > [1] https://www.qemu.org/docs/master/system/multi-process.html > > Best, > Shunsuke >> >> >> [1] https://github.com/oracle/qemu/tree/vfio-user-p3.1 - I believe that's the latest version, Jagannathan Raman will know best >> [2] https://github.com/nutanix/libvfio-user >> > ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-10-05 7:02 ` Mattias Nissler @ 2023-10-06 11:51 ` Shunsuke Mie 2023-10-06 12:00 ` Mattias Nissler 2023-10-06 12:07 ` Thanos Makatos 0 siblings, 2 replies; 12+ messages in thread From: Shunsuke Mie @ 2023-10-06 11:51 UTC (permalink / raw) To: Mattias Nissler Cc: cz172638, bhelgaas, Jagannathan Raman, kishon, kvijayab, kw, levon, linux-kernel, linux-pci, lpieralisi, manivannan.sadhasivam, Marcel Apfelbaum, Michael S. Tsirkin, Paolo Bonzini, qemu-devel, robh, thanos.makatos, vaishnav.a, william.henderson On 2023/10/05 16:02, Mattias Nissler wrote: > On Thu, Oct 5, 2023 at 3:31 AM Shunsuke Mie <mie@igel.co.jp> wrote: >> Hi Jiri, Mattias and all. >> >> On Wed, Oct 4, 2023 at 16:36 Mattias Nissler <mnissler@rivosinc.com>: >>>> hi shunsuke, all, >>>> what about vfio-user + qemu? >> Thank you for the suggestion. >> >>> FWIW, I have had some good success using VFIO-user to bridge software components to hardware designs. For the most part, I have been hooking up software endpoint models to hardware design components speaking the PCIe transaction layer protocol. The central piece you need is a way to translate between the VFIO-user protocol and PCIe transaction layer messages, basically converting ECAM accesses, memory accesses (DMA+MMIO), and interrupts between the two worlds. I have some code which implements the basics of that. It's certainly far from complete (TLP is a massive protocol), but it works well enough for me. I believe we should be able to open-source this if there's interest, let me know. >> It is what I want to do, but I'm not familiar with vfio and vfio-user, and I have a question. QEMU has a PCI TLP communication implementation for Multi-process QEMU[1]. It is similar to your success.
> I'm no qemu expert, but my understanding is that the plan is for the > existing multi-process QEMU implementation to eventually be > superseded/replaced by the VFIO-user based one (qemu folks, please > correct me if I'm wrong). From a functional perspective they are more > or less equivalent AFAICT. > The project is promising. I found a session about adapting vfio to Multi-process QEMU[1] at KVM Forum 2021, but I couldn't find any posted patches. If anyone knows the status of this project, could you please let me know? [1] https://www.youtube.com/watch?v=NBT8rImx3VE >> The multi-process qemu also communicates TLP over UDS. Could you let me know your opinion about it? > Note that neither multi-process qemu nor VFIO-user actually pass > around TLPs, but rather have their own command language to encode > ECAM, MMIO, DMA, interrupts etc. However, translation from/to TLP is > possible and works well enough in my experience. I agree. >>> One thing to note is that there are currently some limits to bridging VFIO-user / TLP that I haven't figured out and/or will need further work: Advanced PCIe concepts like PASID, ATS/PRI, SR-IOV etc. may lack equivalents on the VFIO-user side that would have to be filled in. The folk behind libvfio-user[2] have been very approachable and open to improvements in my experience though. >>> >>> If I understand correctly, the specific goal here is testing PCIe endpoint designs against a Linux host. What you'd need for that is a PCI host controller for the Linux side to talk to and then hooking up endpoints on the transaction layer. QEMU can simulate host controllers that work with existing Linux drivers just fine. Then you can put a vfio-user-pci stub device (I don't think this has landed in qemu yet, but you can find the code at [1]) on the simulated PCI bus which will expose any software interactions with the endpoint as VFIO-user protocol messages over a unix domain socket. 
The piece you need to bring is a VFIO-user server that handles these messages. Its task is basically translating between VFIO-user and TLP and then injecting TLP into your hardware design. >> Yes, if the PCI host controller you mentioned can be implemented, I can achieve my goal. > I meant to say that the existing PCIe host controller implementations > in qemu can be used as is. Sorry, I misunderstood. >> To begin with, I'll investigate vfio and libvfio-user. Thanks! >> >> [1] https://www.qemu.org/docs/master/system/multi-process.html >> >> Best, >> Shunsuke >>> >>> [1] https://github.com/oracle/qemu/tree/vfio-user-p3.1 - I believe that's the latest version, Jagannathan Raman will know best >>> [2] https://github.com/nutanix/libvfio-user >>> ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-10-06 11:51 ` Shunsuke Mie @ 2023-10-06 12:00 ` Mattias Nissler 2023-10-06 12:07 ` Thanos Makatos 0 siblings, 2 replies; 12+ messages in thread From: Mattias Nissler @ 2023-10-06 12:00 UTC (permalink / raw) To: Shunsuke Mie Cc: cz172638, bhelgaas, Jagannathan Raman, kishon, kvijayab, kw, levon, linux-kernel, linux-pci, lpieralisi, manivannan.sadhasivam, Marcel Apfelbaum, Michael S. Tsirkin, Paolo Bonzini, qemu-devel, robh, thanos.makatos, vaishnav.a, william.henderson On Fri, Oct 6, 2023 at 1:51 PM Shunsuke Mie <mie@igel.co.jp> wrote: > > > On 2023/10/05 16:02, Mattias Nissler wrote: > > On Thu, Oct 5, 2023 at 3:31 AM Shunsuke Mie <mie@igel.co.jp> wrote: > >> Hi Jiri, Mattias and all. > >> > >> On Wed, Oct 4, 2023 at 16:36 Mattias Nissler <mnissler@rivosinc.com>: > >>>> hi shunsuke, all, > >>>> what about vfio-user + qemu? > >> Thank you for the suggestion. > >> > >>> FWIW, I have had some good success using VFIO-user to bridge software components to hardware designs. For the most part, I have been hooking up software endpoint models to hardware design components speaking the PCIe transaction layer protocol. The central piece you need is a way to translate between the VFIO-user protocol and PCIe transaction layer messages, basically converting ECAM accesses, memory accesses (DMA+MMIO), and interrupts between the two worlds. I have some code which implements the basics of that. It's certainly far from complete (TLP is a massive protocol), but it works well enough for me. I believe we should be able to open-source this if there's interest, let me know. > >> It is what I want to do, but I'm not familiar with vfio and vfio-user, and I have a question. QEMU has a PCI TLP communication implementation for Multi-process QEMU[1]. It is similar to your success.
> > I'm no qemu expert, but my understanding is that the plan is for the > > existing multi-process QEMU implementation to eventually be > > superseded/replaced by the VFIO-user based one (qemu folks, please > > correct me if I'm wrong). From a functional perspective they are more > > or less equivalent AFAICT. > > > The project is promising. > > I found a session about adapting vfio to Multi-process QEMU[1] at KVM > Forum 2021, but I couldn't find any posted patches. > If anyone knows the status of this project, could you please let me know? Again, I'm just an interested bystander, so take my words with a grain of salt. That said, my understanding is that there is an intention to get the vfio-user client code into qemu in the foreseeable future. The most recent version of the code that I'm aware of is here: https://github.com/oracle/qemu/tree/vfio-user-p3.1 > > [1] https://www.youtube.com/watch?v=NBT8rImx3VE > >> The multi-process qemu also communicates TLP over UDS. Could you let me know your opinion about it? > > Note that neither multi-process qemu nor VFIO-user actually pass > > around TLPs, but rather have their own command language to encode > > ECAM, MMIO, DMA, interrupts etc. However, translation from/to TLP is > > possible and works well enough in my experience. > I agree. > >>> One thing to note is that there are currently some limits to bridging VFIO-user / TLP that I haven't figured out and/or will need further work: Advanced PCIe concepts like PASID, ATS/PRI, SR-IOV etc. may lack equivalents on the VFIO-user side that would have to be filled in. The folk behind libvfio-user[2] have been very approachable and open to improvements in my experience though. > >>> > >>> If I understand correctly, the specific goal here is testing PCIe endpoint designs against a Linux host. What you'd need for that is a PCI host controller for the Linux side to talk to and then hooking up endpoints on the transaction layer. 
QEMU can simulate host controllers that work with existing Linux drivers just fine. Then you can put a vfio-user-pci stub device (I don't think this has landed in qemu yet, but you can find the code at [1]) on the simulated PCI bus which will expose any software interactions with the endpoint as VFIO-user protocol messages over a unix domain socket. The piece you need to bring is a VFIO-user server that handles these messages. Its task is basically translating between VFIO-user and TLP and then injecting TLP into your hardware design. > >> Yes, if the PCI host controller you mentioned can be implemented, I can achieve my goal. > > I meant to say that the existing PCIe host controller implementations > > in qemu can be used as is. > Sorry, I misunderstood. > >> To begin with, I'll investigate vfio and libvfio-user. Thanks! > >> > >> [1] https://www.qemu.org/docs/master/system/multi-process.html > >> > >> Best, > >> Shunsuke > >>> > >>> [1] https://github.com/oracle/qemu/tree/vfio-user-p3.1 - I believe that's the latest version, Jagannathan Raman will know best > >>> [2] https://github.com/nutanix/libvfio-user > >>> ^ permalink raw reply [flat|nested] 12+ messages in thread
* RE: [RFC] Proposal of QEMU PCI Endpoint test environment 2023-10-06 11:51 ` Shunsuke Mie 2023-10-06 12:00 ` Mattias Nissler @ 2023-10-06 12:07 ` Thanos Makatos 1 sibling, 0 replies; 12+ messages in thread From: Thanos Makatos @ 2023-10-06 12:07 UTC (permalink / raw) To: Shunsuke Mie, Mattias Nissler, Jagannathan Raman Cc: cz172638@gmail.com, bhelgaas@google.com, kishon@kernel.org, kvijayab@amd.com, kw@linux.com, levon@movementarian.org, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, lpieralisi@kernel.org, manivannan.sadhasivam@linaro.org, Marcel Apfelbaum, Michael S. Tsirkin, Paolo Bonzini, qemu-devel@nongnu.org, robh@kernel.org, vaishnav.a@ti.com, Elena Ufimtseva > -----Original Message----- > From: Shunsuke Mie <mie@igel.co.jp> > Sent: Friday, October 6, 2023 12:51 PM > To: Mattias Nissler <mnissler@rivosinc.com> > Cc: cz172638@gmail.com; bhelgaas@google.com; Jagannathan Raman > <jag.raman@oracle.com>; kishon@kernel.org; kvijayab@amd.com; > kw@linux.com; levon@movementarian.org; linux-kernel@vger.kernel.org; linux-pci@vger.kernel.org; lpieralisi@kernel.org; manivannan.sadhasivam@linaro.org; > Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Michael S. Tsirkin > <mst@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; qemu-devel@nongnu.org; robh@kernel.org; Thanos Makatos > <thanos.makatos@nutanix.com>; vaishnav.a@ti.com; William Henderson > <william.henderson@nutanix.com> > Subject: Re: [RFC] Proposal of QEMU PCI Endpoint test environment > > > On 2023/10/05 16:02, Mattias Nissler wrote: > > On Thu, Oct 5, 2023 at 3:31 AM Shunsuke Mie <mie@igel.co.jp> wrote: > >> Hi Jiri, Mattias and all. > >> > >> On Wed, Oct 4, 2023 at 16:36 Mattias Nissler <mnissler@rivosinc.com>: > >>>> hi shunsuke, all, > >>>> what about vfio-user + qemu? > >> Thank you for the suggestion. > >> > >>> FWIW, I have had some good success using VFIO-user to bridge software components to hardware designs. 
For the most part, I have been hooking up > software endpoint models to hardware design components speaking the PCIe > transaction layer protocol. The central piece you need is a way to translate > between the VFIO-user protocol and PCIe transaction layer messages, basically > converting ECAM accesses, memory accesses (DMA+MMIO), and interrupts > between the two worlds. I have some code which implements the basics of that. > It's certainly far from complete (TLP is a massive protocol), but it works well > enough for me. I believe we should be able to open-source this if there's interest, > let me know. > >> It is what I want to do, but I'm not familiar with vfio and vfio-user, and I > have a question. QEMU has a PCI TLP communication implementation for Multi-process QEMU[1]. It is similar to your success. > > I'm no qemu expert, but my understanding is that the plan is for the > > existing multi-process QEMU implementation to eventually be > > superseded/replaced by the VFIO-user based one (qemu folks, please > > correct me if I'm wrong). From a functional perspective they are more > > or less equivalent AFAICT. > > > The project is promising. > > I found a session about adapting vfio to Multi-process QEMU[1] at KVM > Forum 2021, but I couldn't find any posted patches. > If anyone knows the status of this project, could you please let me know? AFAIK the mp-qemu folk are working on continuing JJ's work to enable vfio-user client on QEMU, not sure about the timeline, Jag, can you comment? You can still play around with their forked version of QEMU (which they use to post patches), the libvfio-user documentation explains how to use it. 
> > [1] https://www.youtube.com/watch?v=NBT8rImx3VE > >> The multi-process qemu also communicates TLP over UDS. Could you let me > know your opinion about it? > > Note that neither multi-process qemu nor VFIO-user actually pass > > around TLPs, but rather have their own command language to encode > > ECAM, MMIO, DMA, interrupts etc. However, translation from/to TLP is > > possible and works well enough in my experience. > I agree. > >>> One thing to note is that there are currently some limits to bridging VFIO-user > / TLP that I haven't figured out and/or will need further work: Advanced PCIe > concepts like PASID, ATS/PRI, SR-IOV etc. may lack equivalents on the VFIO-user > side that would have to be filled in. The folk behind libvfio-user[2] have been very > approachable and open to improvements in my experience though. > >>> > >>> If I understand correctly, the specific goal here is testing PCIe endpoint > designs against a Linux host. What you'd need for that is a PCI host controller for > the Linux side to talk to and then hooking up endpoints on the transaction layer. > QEMU can simulate host controllers that work with existing Linux drivers just fine. > Then you can put a vfio-user-pci stub device (I don't think this has landed in qemu > yet, but you can find the code at [1]) on the simulated PCI bus which will expose > any software interactions with the endpoint as VFIO-user protocol messages over > unix domain socket. The piece you need to bring is a VFIO-user server that > handles these messages. Its task is basically translating between VFIO-user and > TLP and then injecting TLP into your hardware design. > >> Yes, if the PCI host controller you mentioned can be implemented, I can achieve my > goal. 
> > I meant to say that the existing PCIe host controller implementations > > in qemu can be used as is. > Sorry, I misunderstood. > >> To begin with, I'll investigate vfio and libvfio-user. Thanks! > >> > >> [1] https://www.qemu.org/docs/master/system/multi-process.html > >> > >> Best, > >> Shunsuke > >>> > >>> [1] https://github.com/oracle/qemu/tree/vfio-user-p3.1 - I believe that's the latest version, > Jagannathan Raman will know best > >>> [2] https://github.com/nutanix/libvfio-user > >>> ^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2023-10-06 12:29 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <CANXvt5oKt=AKdqv24LT079e+6URnfqJcfTJh0ajGA17paJUEKw@mail.gmail.com>
2023-08-23 6:09 ` [RFC] Proposal of QEMU PCI Endpoint test environment Manivannan Sadhasivam
2023-08-25 8:56 ` Shunsuke Mie
2023-09-21 9:11 ` Kishon Vijay Abraham I
2023-09-26 7:26 ` Christoph Hellwig
2023-09-26 9:47 ` Shunsuke Mie
2023-09-26 12:40 ` Vaishnav Achath
2023-10-03 4:56 ` Shunsuke Mie
2023-10-03 14:31 ` Jiri Kastner
[not found] <CAGNS4TbhS3XnCFAEi378+cSmJvGMdjN2oTv=tES36vbV4CaDuA@mail.gmail.com>
[not found] ` <CANXvt5qKxfU3p1eSK4fkzRFRBXHSVvSkJrnQRLKPkQjhsMGNzQ@mail.gmail.com>
2023-10-05 7:02 ` Mattias Nissler
2023-10-06 11:51 ` Shunsuke Mie
2023-10-06 12:00 ` Mattias Nissler
2023-10-06 12:07 ` Thanos Makatos
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox